Face Recognition without Features

Authors

  • Matthew Turk
  • Alex Pentland
Abstract

… then recognize the person by comparing characteristics of the face to those of known individuals. Our approach treats the face recognition problem as an intrinsically two-dimensional recognition problem, taking advantage of the fact that faces are normally upright and thus may be described by a small set of 2-D characteristic views. The system functions by projecting face images onto a feature space that spans the significant variations among known face images. The significant features are known as "eigenfaces", because they are the eigenvectors (principal components) of the set of faces; they do not necessarily correspond to features such as eyes, ears, and noses.

INTRODUCTION

Computers that recognize and identify people may be useful in a number of applications: criminal identification, security systems, human-computer interaction, and photographic processing, to name a few. The face is a natural, reasonably reliable, and nonintrusive means of identification, and in a machine vision system faces can be used both to detect the presence of people (recognition) and to identify the individuals (identification). The appropriateness of the face model or representation is critical to meeting such general and robust criteria for the tasks of face recognition and identification.

Typical modeling of object shape for computer vision and graphics includes surface, sweep, and volumetric representations. Polygonal approximations to analytic surfaces, B-spline surfaces, and elastically deformable models have been used for generating and animating faces in computer graphics. Such representations are not particularly appropriate for the inverse problem of recovering information from images and matching to stored models.

Most approaches to automated face recognition and identification since the late 1960's have focused on locating and modeling individual features (such as the eyes, the nose, the mouth) and their relationships (e.g. [2, 3, 4]). These systems have met with limited success, largely because the face models are not robust to small changes such as different expressions, and performance degrades quickly as the input face image differs from the expected configuration. As a number of researchers have shown, individual features and their immediate relationships comprise an insufficient representation to account for the performance of adult human face identification [5]. Feature locations are not a sufficient basis for face recognition.

Taking the opposite position, we set out to identify and recognize faces without explicitly locating and modeling features. Instead we characterize the variation among a set of faces and calculate a small number of characteristic face images which serve as a basis set. These "eigenfaces" span the "face space", in that linear combinations of these images can approximate an appropriately large number of face images. An image (or subimage) is recognized as a face when it lies within or near the low-dimensional "face space". The location in face space determines the identity among the known faces. The calculation of the eigenfaces is performed offline. The projection into face space is a computationally simple process involving image multiplies and summations, and the identification is a simple low-dimensional pattern recognition process. The system therefore performs face identification quickly with available hardware.
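As a concrete illustration of the offline eigenface calculation sketched above, the following NumPy fragment computes a mean face and the leading eigenvectors from a stack of aligned grayscale training images. It is a minimal sketch under stated assumptions, not the authors' implementation: the function name, array shapes, and the use of an SVD in place of an explicit covariance matrix are choices made here for brevity.

    import numpy as np

    def compute_eigenfaces(face_images, num_components):
        """face_images: array of shape (M, H, W), M aligned grayscale training faces.
        Returns the mean face and the num_components leading eigenfaces
        (each a vector of length H*W), ordered by decreasing eigenvalue."""
        M, H, W = face_images.shape
        X = face_images.reshape(M, H * W).astype(np.float64)

        # Subtract the average face so the principal components capture the
        # variation among faces rather than the faces themselves.
        mean_face = X.mean(axis=0)
        A = X - mean_face

        # The eigenfaces are the eigenvectors of the covariance matrix of A.
        # The SVD of A yields them directly: the rows of Vt are the principal
        # directions, ordered by singular value (i.e. by explained variance).
        _, _, Vt = np.linalg.svd(A, full_matrices=False)
        return mean_face, Vt[:num_components]

This step is performed once, offline; only the stored mean face and eigenfaces are needed at run time.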
In the following sections, we will briefly discuss the computation of the eigenfaces, recognition based on the face space, learning new faces, and application to other tasks.

DEFINING THE FACE SPACE

In the language of information theory, we want to extract the relevant information in a face image, encode it as efficiently as possible, and compare one face encoding with a database of models encoded similarly. A simple approach to extracting the information contained in an image of a face is to somehow capture the variation in a collection of face images, independent of any judgement of features, and use this information to encode and compare individual face images. In mathematical terms, we wish to find the principal components of the distribution of faces, or the eigenvectors of the covariance matrix of the set of face images, treating an image as a point (or vector) in a very high dimensional space. The eigenvectors are ordered, each one accounting for a different amount of the variation among the face images.

Figure 1: Face images used as the training set.

These eigenvectors can be thought of as a set of features which together characterize the variation between face images. Each image location contributes more or less to each eigenvector, so that we can display the eigenvector as a sort of ghostly face which we call an eigenface. Some of these faces are shown in Figure 2. Each eigenface deviates from uniform grey where some facial feature differs among the set of training faces; they are a sort of map of the variations between faces. Each individual face can be represented exactly in terms of a linear combination of the eigenfaces. Each face can also be approximated using only the "best" eigenfaces, those that have the largest eigenvalues and which therefore account for the most variance within the set of face images. The best M eigenfaces span an M-dimensional subspace, the "face space", of all possible images. Within this framework, a face image should be "close" to the face space, i.e. the distance between the image and its projection onto face space should be small, and a non-face image should be "far" from the face space. This distance from face space, ε, is used as a measure of "faceness".

Figure 2: Seven of the eigenfaces calculated from the training set of Figure 1.

The idea of using eigenfaces was motivated by a technique developed by Sirovich and Kirby [6, 7] for efficiently representing pictures of faces using principal component analysis. They argued that, at least in principle, any collection of face images can be approximately reconstructed by storing a small collection of weights for each face and a small set of standard pictures (the eigenpictures). While the use of eigenspace for image coding seems inefficient in general, it occurred to us that perhaps a useful way to learn and recognize faces would be to build up the characteristic features by experience over time and recognize particular faces by comparing the feature weights needed to (approximately) reconstruct them with the weights associated with known individuals. Each individual, therefore, would be characterized by the small set of feature or eigenpicture weights needed to describe and approximately reconstruct them, an extremely compact representation when compared with the images themselves.
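The face-space operations just described can likewise be sketched in NumPy. This is an illustrative reconstruction rather than the paper's code: the function names, the Euclidean distances, and the nearest-neighbour identification rule are assumptions consistent with the description above.

    import numpy as np

    def project_to_face_space(image, mean_face, eigenfaces):
        """Weight vector of an image, one coefficient per eigenface
        (a computation of image multiplies and summations)."""
        phi = image.reshape(-1).astype(np.float64) - mean_face
        return eigenfaces @ phi

    def distance_from_face_space(image, mean_face, eigenfaces):
        """Reconstruction error, the distance epsilon used as a measure of
        'faceness': small near face space, large for non-face images."""
        phi = image.reshape(-1).astype(np.float64) - mean_face
        weights = eigenfaces @ phi
        reconstruction = eigenfaces.T @ weights
        return np.linalg.norm(phi - reconstruction)

    def identify(image, mean_face, eigenfaces, known_weights, known_labels):
        """Identify a face by the nearest stored weight vector in face space."""
        w = project_to_face_space(image, mean_face, eigenfaces)
        distances = np.linalg.norm(known_weights - w, axis=1)
        nearest = int(np.argmin(distances))
        return known_labels[nearest], distances[nearest]

Each known individual is then stored only as a small weight vector, the compact representation referred to above.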

Related articles

Face Recognition by Cognitive Discriminant Features

Face recognition is still an active pattern analysis topic. Faces have already been treated as objects or textures, but the human face recognition system takes a different approach. People refer to faces by their most discriminant features, and usually describe them in sentences like ``She's snub-nosed'', ``he's got a long nose'' or ``he's got round eyes''. These...

Hybridization of Facial Features and Use of Multi Modal Information for 3D Face Recognition

Despite achieving good performance in controlled environments, conventional 3D face recognition systems still encounter problems in handling large variations in lighting conditions, facial expression and head pose. Humans use a hybrid approach to recognize faces, and therefore in this proposed method the human face recognition ability is incorporated by combining global and local ...

2D Dimensionality Reduction Methods without Loss

In this paper, several two-dimensional extensions of principal component analysis (PCA) and linear discriminant analysis (LDA) techniques have been applied in a lossless dimensionality reduction framework for the face recognition application. In this framework, the benefits of dimensionality reduction were used to improve the performance of its predictive model, which was a support vector machine (...

Face Recognition Using PCA and the Gabor Filter

Face recognition methods based on face structure are unsupervised techniques and produce unfavorable results in the presence of linear changes in images. PCA is a linear transform and a powerful tool for data analysis, but it does not produce good results for face recognition when there are non-linear changes resulting from changes in position, intensity and gesture in th...

Face Recognition using Eigenfaces, PCA and Support Vector Machines

This paper is based on a combination of principal component analysis (PCA), eigenfaces and support vector machines. Using the N-fold method and with respect to the value of N, each person's face images are divided into two sections. As a result, vectors of training features and test features are obtained. Classification precision and accuracy were examined with three different types of kernel and...

A comprehensive experimental comparison of the aggregation techniques for face recognition

In face recognition, one of the most important problems to tackle is the large amount of data and the redundancy of information contained in facial images. There are numerous approaches attempting to reduce this redundancy. One of them is information aggregation based on the results of classifiers built on selected facial areas, these being the most salient regions from the point of view of classificati...

Journal:

Volume   Issue

Pages  -

Publication date: 1990